Deep ReLU neural networks in high-dimensional approximation

Authors

Abstract

We study the computational complexity of deep ReLU (Rectified Linear Unit) neural networks for approximating functions from the Hölder–Zygmund space of mixed smoothness defined on the d-dimensional unit cube, when the dimension d may be very large. The approximation error is measured in the norm of an isotropic Sobolev space. For every function f of this smoothness, we explicitly construct a network whose output approximates f with a prescribed accuracy ε, and we prove tight dimension-dependent upper and lower bounds on the complexity of this approximation, characterized as the size and depth of the network, explicitly in d and ε. The proofs of these results rely in particular on sparse-grid sampling recovery based on the Faber series.
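The link between the Faber series and ReLU networks is concrete: the univariate Faber–Schauder hat function is exactly a sum of three ReLU units, so a truncated Faber expansion is itself a small ReLU network. Below is a minimal numerical sketch of this idea (the function name `faber_approx` is ours, and this univariate construction is only a caricature of the paper's sparse-grid, d-dimensional argument):

```python
# Minimal sketch: a truncated Faber series is itself a ReLU network,
# because each hat function is an exact 3-neuron ReLU combination.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def hat(x):
    # h(x) = 2x on [0,1/2], 2(1-x) on [1/2,1], 0 elsewhere.
    return 2 * relu(x) - 4 * relu(x - 0.5) + 2 * relu(x - 1.0)

def faber_approx(f, x, K):
    """Truncated Faber series of f on [0,1] up to dyadic level K.

    Every term is a shifted/dilated hat, i.e. a small ReLU subnetwork,
    so the whole sum is a ReLU network whose size grows with K.
    """
    out = f(0.0) * (1 - x) + f(1.0) * x              # linear interpolant
    for k in range(K + 1):
        for s in range(2 ** k):
            left, right = s / 2 ** k, (s + 1) / 2 ** k
            mid = (left + right) / 2
            d = f(mid) - 0.5 * (f(left) + f(right))  # Faber coefficient
            out = out + d * hat(2 ** k * x - s)
    return out

x = np.linspace(0.0, 1.0, 1001)
f = lambda t: np.sin(np.pi * t)
for K in (2, 4, 6):
    err = np.max(np.abs(faber_approx(f, x, K) - f(x)))
    print(f"level K={K}: sup-error ~ {err:.2e}")
```

For a smooth f the printed sup-error decays roughly like 4^(−K) while the number of hat units grows like 2^K; the paper's contribution is the far more delicate dimension-explicit version of this size/accuracy trade-off.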

Similar Articles

Optimal approximation of continuous functions by very deep ReLU networks

We prove that deep ReLU neural networks with conventional fully-connected architectures with W weights can approximate continuous ν-variate functions f with uniform error not exceeding a_ν ω_f(c_ν W^(−2/ν)), where ω_f is the modulus of continuity of f and a_ν, c_ν are some ν-dependent constants. This bound is tight. Our construction is inherently deep and nonlinear: the obtained approximation rate cann...
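To make the shape of this bound tangible, the sketch below numerically estimates the modulus of continuity ω_f on a grid and evaluates it at W^(−2/ν) for a univariate example; the constants a_ν and c_ν are set to 1 as placeholders, since the abstract only asserts their existence:

```python
# Illustrative only: evaluate the shape of the bound a_v * w_f(c_v * W^(-2/v))
# for a concrete f, with the unknown v-dependent constants a_v, c_v set to 1
# (placeholder values).
import numpy as np

def modulus_of_continuity(f, delta, grid=10_000):
    # w_f(delta) = sup over |x-y| <= delta of |f(x)-f(y)|,
    # estimated on a uniform grid of [0,1].
    x = np.linspace(0.0, 1.0, grid)
    fx = f(x)
    shift = max(1, int(delta * (grid - 1)))
    return np.max(np.abs(fx[shift:] - fx[:-shift]))

f = lambda t: np.sqrt(t)          # continuous but not Lipschitz at 0
nu = 1                            # univariate example
for W in (10, 100, 1000):
    delta = W ** (-2 / nu)
    print(f"W={W:5d}: bound ~ {modulus_of_continuity(f, delta):.3e}")
```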

Optimal approximation of piecewise smooth functions using deep ReLU neural networks

We study the necessary and sufficient complexity of ReLU neural networks, in terms of depth and number of weights, which is required for approximating classifier functions in an L²-sense. As a model class, we consider the set E^β(R^d) of possibly discontinuous piecewise C^β functions f : [−1/2, 1/2]^d → R, where the different "smooth regions" of f are separated by C^β hypersurfaces. For given dimension d ≥ ...

Nonparametric regression using deep neural networks with ReLU activation function

Consider the multivariate nonparametric regression model. It is shown that estimators based on sparsely connected deep neural networks with ReLU activation function and properly chosen network architecture achieve the minimax rates of convergence (up to log n-factors) under a general composition assumption on the regression function. The framework includes many well-studied structural constrain...

Provable approximation properties for deep neural networks

We discuss approximation of functions using deep neural nets. Given a function f on a d-dimensional manifold Γ ⊂ R^m, we construct a sparsely-connected depth-4 neural network and bound its error in approximating f. The size of the network depends on the dimension and curvature of the manifold Γ, the complexity of f in terms of its wavelet description, and only weakly on the ambient dimension m. Es...

Why Deep Neural Networks for Function Approximation?

Recently there has been much interest in understanding why deep neural networks are preferred to shallow networks. We show that, for a large class of piecewise smooth functions, the number of neurons needed by a shallow network to approximate a function is exponentially larger than the corresponding number of neurons needed by a deep network for a given degree of function approximation. First, ...
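A standard way to see this kind of depth separation (in the spirit of the claim above, though not necessarily this paper's exact construction) is the composed-tent argument: depth multiplies the number of linear pieces a ReLU network can produce, while width only adds them. A short sketch:

```python
# Classical illustration of depth/width separation: composing a 2-neuron
# ReLU "tent" with itself k times yields a sawtooth with 2^(k-1) peaks
# (2^k linear pieces) using only O(k) neurons, whereas a one-hidden-layer
# ReLU network needs roughly one neuron per linear piece.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def tent(x):
    # 2-neuron ReLU block: maps [0,1] onto [0,1] with one peak at x = 1/2.
    return 2 * relu(x) - 4 * relu(x - 0.5)

def sawtooth(x, k):
    # Depth-k composition of the tent map.
    for _ in range(k):
        x = tent(x)
    return x

x = np.linspace(0.0, 1.0, 2049)
for k in (1, 3, 5):
    y = sawtooth(x, k)
    # Count slope sign changes to count linear pieces.
    pieces = 1 + np.count_nonzero(np.abs(np.diff(np.sign(np.diff(y)))) > 0)
    print(f"depth k={k}: ~{pieces} linear pieces from {2 * k} ReLU units")
```

Matching the 2^k pieces produced here with a depth-one network would require exponentially many neurons, which is the flavor of the exponential gap claimed above.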

Journal

Journal title: Neural Networks

Year: 2021

ISSN: 1879-2782, 0893-6080

DOI: https://doi.org/10.1016/j.neunet.2021.07.027